A key problem in a variety of applications is domain adaptation from a public source domain, for which one has relatively large amounts of labeled data with no privacy constraints, to a private target domain, for which only a private sample with little or no labeled data is available. In regression problems with no privacy constraints on the source or target data, a discrepancy minimization algorithm with favorable theoretical guarantees was shown to outperform a number of other adaptation algorithm baselines. Building on that approach, we design differentially private discrepancy-based algorithms for adaptation from a source domain with public labeled data to a target domain with unlabeled private data. The design and analysis of our private algorithms critically hinge on several key properties that we prove for the smoothed discrepancy, such as its smoothness with respect to the $\ell_1$-norm and the sensitivity of its gradient. Our solutions are based on private variants of the Frank-Wolfe and Mirror-Descent algorithms. We show that our adaptation algorithms benefit from strong generalization and privacy guarantees, and report the results of experiments demonstrating their effectiveness.
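As a rough illustration of one ingredient above (not the authors' algorithm, and with no privacy-composition accounting), a Frank-Wolfe iteration over the probability simplex can be made noisy by perturbing the gradient with Laplace noise before the linear-minimization step. The objective, step counts, and noise scale below are illustrative choices of ours:

```python
import numpy as np

def private_frank_wolfe(grad_fn, dim, steps=500, noise_scale=0.01, seed=0):
    """Toy noisy Frank-Wolfe over the probability simplex: Laplace noise is
    added to the gradient before the linear-minimization step, which on the
    simplex just picks the coordinate with the smallest noisy partial
    derivative."""
    rng = np.random.default_rng(seed)
    q = np.full(dim, 1.0 / dim)          # start at the barycenter
    for t in range(1, steps + 1):
        g = grad_fn(q) + rng.laplace(scale=noise_scale, size=dim)
        i = int(np.argmin(g))            # noisy linear minimizer is a vertex
        gamma = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        q = (1 - gamma) * q
        q[i] += gamma                    # move toward vertex e_i
    return q

# toy smooth objective ||q - target||^2 restricted to the simplex
target = np.array([0.7, 0.2, 0.1])
q_hat = private_frank_wolfe(lambda q: 2 * (q - target), dim=3)
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free; the noise only affects which vertex is selected each round.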
Most studies of federated learning have focused on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic applications of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a $21$M-parameter Transformer and a $20.2$M-parameter Conformer that achieve the same or better perplexity as a similarly sized LSTM, with $\sim10\times$ smaller client-to-server communication cost and $11\%$ lower perplexity than the smaller LSTMs common in the literature.
We study the problems of distributed mean estimation and optimization under communication constraints. We propose a correlated quantization protocol whose leading error-guarantee term depends on the mean deviation of the data points rather than only on their absolute range. The design requires no prior knowledge of the concentration properties of the dataset, which was necessary to obtain such a dependence in previous work. We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to better convergence rates. We also prove the optimality of our protocol under mild assumptions. Experimental results show that our proposed algorithm outperforms existing mean estimation protocols on a variety of tasks.
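To convey the underlying idea (this is a toy sketch of ours, not the paper's protocol), quantizing each client's deviation from a shared reference point makes the quantization error scale with how concentrated the data is, not with its absolute range; the reference choice, clipping radius, and level count below are illustrative assumptions:

```python
import numpy as np

def quantize_around_reference(x, ref, radius, levels=4, rng=None):
    """Stochastically quantize each client's deviation from a shared reference
    point, so quantization error scales with how concentrated the data is
    around the reference rather than with its absolute range."""
    rng = rng or np.random.default_rng(0)
    d = np.clip(x - ref, -radius, radius)     # deviation, clipped to the trusted radius
    u = (d + radius) / (2 * radius)           # map to [0, 1]
    grid = levels - 1
    low = np.floor(u * grid)
    p_up = u * grid - low                     # unbiased stochastic rounding
    q = (low + (rng.random(u.shape) < p_up)) / grid
    return ref - radius + 2 * radius * q      # back to the original scale

rng = np.random.default_rng(1)
data = 100.0 + rng.normal(0.0, 0.5, size=(1000, 4))   # concentrated far from zero
ref = data[0]                                         # any shared coarse reference
est = quantize_around_reference(data, ref, radius=3.0, rng=rng).mean(axis=0)
```

With only four quantization levels, the averaged estimate stays close to the true mean because the per-point error is bounded by the small deviation radius, even though the data sits far from zero.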
We study the problem of robustly estimating the parameter $p$ of an Erdős–Rényi random graph on $n$ nodes, where a $\gamma$ fraction of the nodes may be adversarial. After showing the deficiencies of canonical estimators, we design a computationally efficient spectral algorithm that estimates $p$ up to accuracy $\tilde O(\sqrt{p(1-p)}/n + \gamma\sqrt{p(1-p)}/\sqrt{n} + \gamma/n)$ for $\gamma < 1/60$. Furthermore, we give an inefficient algorithm with similar accuracy for all $\gamma < 1/2$, the information-theoretic limit. Finally, we prove a nearly matching statistical lower bound, showing that the error of our algorithms is optimal up to logarithmic factors.
Federated learning is a machine learning technique that enables training over decentralized data. Recently, federated learning has become an active research area due to an increased focus on privacy and security. In light of this, a variety of open-source federated learning libraries have been developed and released. We introduce FedJax, a JAX-based open-source library for federated learning simulations that emphasizes ease of use in research. With its simple primitives for implementing federated learning algorithms, prepackaged datasets, models and algorithms, and fast simulation speed, FedJax aims to make developing and evaluating federated algorithms faster and easier for researchers. Our benchmark results show that FedJax can be used to train models with federated averaging on the EMNIST dataset in a few minutes, and on the Stack Overflow dataset in roughly an hour, with standard hyperparameters using TPUs.
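FedJax's actual API differs; purely to illustrate the kind of loop such libraries simulate, here is a minimal NumPy sketch of one federated averaging round for a linear least-squares model (all names and hyperparameters are our own illustrative choices):

```python
import numpy as np

def fedavg_round(global_w, client_datasets, lr=0.1, local_steps=5):
    """One round of federated averaging: each client runs a few local SGD
    steps from the global weights, and the server averages the resulting
    models, weighted by client dataset size."""
    new_ws, sizes = [], []
    for X, y in client_datasets:
        w = global_w.copy()
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
            w -= lr * grad
        new_ws.append(w)
        sizes.append(len(y))
    return np.average(new_ws, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))     # each client holds a local shard
w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
```

A simulation library wraps exactly this structure (client sampling, local training, server aggregation) behind reusable primitives so that researchers only swap in models and datasets.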
We propose and analyze algorithms to address a range of learning tasks under user-level differential privacy constraints. Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution ($m \ge 1$ samples), providing more stringent but more realistic protection against information leakage. We show that for high-dimensional mean estimation, empirical risk minimization with smooth losses, stochastic convex optimization, and learning hypothesis classes with finite metric entropy, the privacy cost decreases as $O(1/\sqrt{m})$ as users provide more samples. In contrast, when the number of users $n$ increases, the privacy cost decreases at the faster $O(1/n)$ rate. We complement these results with lower bounds showing the minimax optimality of our algorithms for mean estimation and stochastic convex optimization. Our algorithms rely on novel techniques for private mean estimation in arbitrary dimension, with error scaling with the concentration radius $\tau$ of the distribution rather than the entire range.
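A toy version of the user-level idea (ours, not the paper's algorithm; the coarse center is computed non-privately here for brevity, and the radius and noise calibration are illustrative): average each user's samples first, so a single user's whole contribution only changes one of $n$ user means, then clip and add noise calibrated to that reduced sensitivity.

```python
import numpy as np

def user_level_dp_mean(user_samples, radius, epsilon, rng=None):
    """Toy user-level DP mean: average each user's samples, clip the user
    means to a ball of the given radius around a coarse center, then add
    Laplace noise calibrated to the clipped sensitivity."""
    rng = rng or np.random.default_rng(0)
    user_means = np.array([np.mean(s, axis=0) for s in user_samples])
    center = np.median(user_means, axis=0)   # non-private here, for brevity
    d = user_means - center
    norms = np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-12)
    clipped = center + d * np.minimum(1.0, radius / norms)
    n, dim = clipped.shape
    # replacing one user moves the clipped mean by at most 2*radius*sqrt(dim)/n in l1
    scale = 2 * radius * np.sqrt(dim) / (n * epsilon)
    return clipped.mean(axis=0) + rng.laplace(scale=scale, size=dim)

rng = np.random.default_rng(2)
users = [rng.normal(loc=[3.0, -1.0], scale=0.5, size=(20, 2)) for _ in range(200)]
est = user_level_dp_mean(users, radius=1.0, epsilon=5.0, rng=rng)
```

The key effect matches the abstract: because each user contributes one already-averaged point, the clipping radius (and hence the noise) can shrink with the per-user sample count $m$.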
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the clients' data, yielding even faster convergence. The latter is the first result to quantify the usefulness of local-steps in distributed optimization.
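The control-variate correction can be sketched as follows; this is our simplified reading of a SCAFFOLD-style local update (one of the paper's two control-variate update options), demonstrated on two heterogeneous quadratic clients with illustrative hyperparameters:

```python
import numpy as np

def scaffold_local_update(w_global, c_global, c_local, grad_fn, lr=0.1, steps=10):
    """SCAFFOLD-style local update: each local gradient step is corrected by
    the difference between the server and client control variates, which
    counteracts client drift on heterogeneous data."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (grad_fn(w) - c_local + c_global)
    # refresh the client control variate from the net local progress
    new_c_local = c_local - c_global + (w_global - w) / (lr * steps)
    return w, new_c_local

# toy: two clients with heterogeneous quadratic losses 0.5*||w - a_i||^2
a = [np.array([2.0, 0.0]), np.array([-1.0, 3.0])]
grad_fns = [lambda w, ai=ai: w - ai for ai in a]
w, c_global = np.zeros(2), np.zeros(2)
c_locals = [np.zeros(2), np.zeros(2)]
for _ in range(50):
    updates = [scaffold_local_update(w, c_global, c_locals[i], grad_fns[i])
               for i in range(2)]
    deltas = [u[1] - c_locals[i] for i, u in enumerate(updates)]
    c_locals = [u[1] for u in updates]
    w = np.mean([u[0] for u in updates], axis=0)
    c_global = c_global + np.mean(deltas, axis=0)
```

At convergence each client's control variate approximates its own gradient at the shared optimum, so the corrected local steps no longer drift toward the client's individual minimizer.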
A key learning scenario in large-scale applications is that of federated learning, where a centralized model is trained based on data originating from a large number of clients. We argue that, with the existing training and inference, federated models can be biased towards different clients. Instead, we propose a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions. We further show that this framework naturally yields a notion of fairness. We present data-dependent Rademacher complexity guarantees for learning with this objective, which guide the definition of an algorithm for agnostic federated learning. We also give a fast stochastic optimization algorithm for solving the corresponding optimization problem, for which we prove convergence bounds, assuming a convex loss function and hypothesis set. We further empirically demonstrate the benefits of our approach in several datasets. Beyond federated learning, our framework and algorithm can be of interest to other learning scenarios such as cloud computing, domain adaptation, drifting, and other contexts where the training and test distributions do not coincide.
Motivation. A key learning scenario in large-scale applications is that of federated learning. In that scenario, a centralized model is trained based on data originating from a large number of clients, which may be mobile phones, other mobile devices, or sensors (Konečnỳ, McMahan, Yu, Richtárik, Suresh, and Bacon, 2016b; Konečnỳ, McMahan, Ramage, and Richtárik, 2016a). The training data typically remains distributed over the clients, each with possibly unreliable or relatively slow network connections. Federated learning raises several types of issues and has been the topic of multiple research efforts. These include systems, networking, and communication bottleneck problems due to frequent exchanges between the central server and the clients. To deal with such problems, an averaging technique was suggested that consists of transmitting the central model to a subset of clients, training it with the locally available data, and averaging the local updates. Smith et al. (2017) proposed to further leverage the relationship between clients, assumed to be known, and cast
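The agnostic objective is a minimax problem: minimize over the model the worst mixture of client losses. A minimal sketch of ours (not the paper's fast stochastic algorithm) solves it by simultaneous gradient descent on the model and exponentiated-gradient ascent on the mixture weights, with illustrative step sizes:

```python
import numpy as np

def agnostic_fl(losses_grads, dim, eta_w=0.1, eta_lam=0.05, steps=2000):
    """min over w of max over lam in the simplex of sum_k lam_k * L_k(w),
    via gradient descent on w and exponentiated-gradient ascent on lam."""
    k = len(losses_grads)
    w = np.zeros(dim)
    lam = np.full(k, 1.0 / k)
    for _ in range(steps):
        pairs = [lg(w) for lg in losses_grads]      # per-client (loss, grad)
        Ls = np.array([p[0] for p in pairs])
        w = w - eta_w * sum(l * p[1] for l, p in zip(lam, pairs))
        lam = lam * np.exp(eta_lam * Ls)            # tilt toward worst-off clients
        lam = lam / lam.sum()                       # stay on the simplex
    return w, lam

# toy: two clients with losses ||w - a_i||^2; the minimax model sits
# equidistant from both targets, with a uniform worst-case mixture
a1, a2 = np.array([2.0]), np.array([-1.0])
clients = [lambda w, a=a: (float(np.sum((w - a) ** 2)), 2 * (w - a))
           for a in (a1, a2)]
w, lam = agnostic_fl(clients, dim=1)
```

The multiplicative update keeps the mixture weights positive and normalized, and it upweights exactly the clients whose current loss is largest, which is the fairness mechanism the framework formalizes.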
Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude. * Work performed while also affiliated with University of Edinburgh.
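The paper's sketched updates combine quantization, random rotations, and subsampling; the sketch below (ours, with rotations omitted) illustrates just the subsampling and 1-bit stochastic quantization pieces, where the key invariant is that the compressed update is unbiased, so averaging many clients' sketches recovers the true update in expectation:

```python
import numpy as np

def sketch_update(u, sample_frac=0.5, rng=None):
    """Unbiased toy 'sketched update': stochastically quantize each entry to
    one bit spanning [min, max] of the update, then subsample coordinates,
    rescaling the survivors by 1/sample_frac so that E[sketch] == u."""
    rng = rng or np.random.default_rng(0)
    lo, hi = u.min(), u.max()
    p = (u - lo) / (hi - lo)                       # u = (1-p)*lo + p*hi
    q = np.where(rng.random(u.size) < p, hi, lo)   # E[q] = u, one bit per entry
    mask = rng.random(u.size) < sample_frac        # keep ~sample_frac of coords
    return np.where(mask, q / sample_frac, 0.0)

rng = np.random.default_rng(42)
u = rng.normal(size=64)
est = np.mean([sketch_update(u, rng=rng) for _ in range(20000)], axis=0)
```

Each sketched vector costs roughly one bit per kept coordinate (plus the two scalars `lo` and `hi`) instead of a full float per coordinate, which is where the communication savings come from.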